75 research outputs found

    ML4PG in Computer Algebra verification

    ML4PG is a machine-learning extension that provides statistical proof hints during the process of Coq/SSReflect proof development. In this paper, we use ML4PG to find proof patterns in the CoqEAL library, a library devised to verify the correctness of Computer Algebra algorithms. In particular, we use ML4PG to help us in the formalisation of an efficient algorithm to compute the inverse of triangular matrices.
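    The Coq/SSReflect formalisation itself is not reproduced in the abstract; purely as a rough illustration of the kind of algorithm CoqEAL targets, the sketch below inverts a nonsingular upper-triangular matrix column by column via back-substitution. The function name and the use of NumPy are assumptions made for this example, not code from the paper.

        import numpy as np

        def upper_triangular_inverse(U: np.ndarray) -> np.ndarray:
            """Invert a nonsingular upper-triangular matrix by back-substitution.

            Illustrative sketch only; the paper formalises an efficient algorithm
            in Coq/SSReflect rather than in Python.
            """
            n = U.shape[0]
            X = np.zeros_like(U, dtype=float)
            for j in range(n):                      # solve U @ x = e_j for each unit vector e_j
                for i in range(j, -1, -1):          # back-substitution from row j up to row 0
                    rhs = (1.0 if i == j else 0.0) - U[i, i + 1:j + 1] @ X[i + 1:j + 1, j]
                    X[i, j] = rhs / U[i, i]
            return X

        # Example: the result agrees with numpy.linalg.inv on a small triangular matrix.
        U = np.array([[2.0, 1.0, 3.0],
                      [0.0, 4.0, 5.0],
                      [0.0, 0.0, 6.0]])
        assert np.allclose(upper_triangular_inverse(U), np.linalg.inv(U))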

    Computing Persistent Homology within Coq/SSReflect

    Persistent homology is one of the most active branches of Computational Algebraic Topology, with applications in several contexts such as optical character recognition or the analysis of point cloud data. In this paper, we report on the formal development of certified programs to compute persistent Betti numbers, an instrumental tool of persistent homology, using the Coq proof assistant together with the SSReflect extension. To this aim, it has been necessary to formalise the underlying mathematical theory of these algorithms. This is another example showing that interactive theorem provers are now mature enough to tackle the formalisation of non-trivial mathematical theories.
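    The certified programs described here are Coq/SSReflect developments; purely as an illustration of the linear algebra behind Betti numbers (the ordinary, non-persistent case, computed over the rationals), below is a minimal sketch assuming NumPy boundary matrices. The function name and data layout are assumptions made for this example.

        import numpy as np

        def betti_numbers(boundaries: list[np.ndarray], dims: list[int]) -> list[int]:
            """Betti numbers over Q from boundary matrices (illustrative sketch only).

            boundaries[k] is the matrix of the boundary map from k-chains to (k-1)-chains
            (boundaries[0] is the zero map), and dims[k] is the number of k-simplices.
            beta_k = dim ker d_k - rank d_{k+1} = dims[k] - rank d_k - rank d_{k+1}.
            """
            def rank(M):
                return 0 if M.size == 0 else np.linalg.matrix_rank(M)

            betti = []
            for k, n_k in enumerate(dims):
                rk = rank(boundaries[k])
                rk_next = rank(boundaries[k + 1]) if k + 1 < len(boundaries) else 0
                betti.append(n_k - rk - rk_next)
            return betti

        # Example: a hollow triangle (three vertices, three edges) has beta_0 = 1, beta_1 = 1.
        d0 = np.zeros((0, 3))                        # boundary of vertices is the zero map
        d1 = np.array([[-1, -1,  0],                 # each column is the boundary of one edge
                       [ 1,  0, -1],
                       [ 0,  1,  1]])
        print(betti_numbers([d0, d1], dims=[3, 3]))  # -> [1, 1]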

    Towards a framework for the democratisation of deep semantic segmentation models

    Semantic segmentation models based on deep learning techniques have been successfully applied in several contexts. However, non-expert users might find those techniques challenging to use for several reasons, including the need to try different algorithms implemented in heterogeneous libraries, the configuration of hyperparameters, the lack of support in many state-of-the-art algorithms for training on custom datasets, and the variety of metrics employed to evaluate semantic segmentation models. In this work, we present the first steps towards the development of a framework that facilitates the construction and usage of deep segmentation models. This work was partially supported by Ministerio de Ciencia e Innovación PID2020-115225RB-I00 / AEI / 10.13039/50110001103.
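    The framework itself is only outlined in the abstract; as an example of one of the heterogeneous evaluation metrics such a framework would need to unify, the sketch below computes mean intersection-over-union over label masks. It is an illustration assuming NumPy integer masks, not code from the framework.

        import numpy as np

        def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
            """Mean intersection-over-union between predicted and ground-truth label masks.

            Illustrative sketch of a common segmentation metric; not taken from the
            framework described in the paper. Classes absent from both masks are skipped.
            """
            ious = []
            for c in range(num_classes):
                pred_c, target_c = (pred == c), (target == c)
                union = np.logical_or(pred_c, target_c).sum()
                if union == 0:
                    continue                         # class not present in either mask
                intersection = np.logical_and(pred_c, target_c).sum()
                ious.append(intersection / union)
            return float(np.mean(ious)) if ious else 0.0

        # Example on tiny 2x2 masks with two classes.
        pred = np.array([[0, 1], [1, 1]])
        gt   = np.array([[0, 0], [1, 1]])
        print(mean_iou(pred, gt, num_classes=2))     # IoU 0.5 for class 0, 2/3 for class 1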

    Proyectos de aprendizaje profundo usando datos regionales (Deep learning projects using regional data)

    Due to the impact of Deep Learning both in industry and academia, there is a growing demand for graduates with skills in this field, and universities are starting to offer courses that include Deep Learning subjects. Hands-on assignments that teach students how to tackle Deep Learning tasks are an instrumental part of those courses. However, most Deep Learning assignments have two main drawbacks. First, they either use toy datasets that are useful for teaching concepts but whose solutions do not generalise to real problems, or employ datasets that require specialised knowledge to fully understand the problem. Secondly, most Deep Learning assignments focus on training a model and do not take into account other stages of the Deep Learning pipeline, such as data cleaning or model deployment. In this work, we present an experience in an Artificial Intelligence course where we have tackled the aforementioned drawbacks by using datasets from the regional council where our university is located. Namely, the students of the course have developed several computer vision and natural language processing projects; for instance, a news classifier and an application to colourise historical images. We share the workflow followed to organise this experience, several lessons that we have learned, and challenges that may be faced by other instructors who try to conduct a similar initiative.
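    The student projects themselves are not included in the abstract; as a hedged sketch of the kind of simple, non-deep baseline a news-classification project might start from before moving to deep models, the snippet below uses a TF-IDF pipeline from scikit-learn. The CSV path and column names are hypothetical; the regional datasets used in the course are not specified here.

        # Minimal text-classification baseline, not the course material itself.
        import pandas as pd
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline

        df = pd.read_csv("news.csv")                 # hypothetical columns: "text", "category"
        X_train, X_test, y_train, y_test = train_test_split(
            df["text"], df["category"], test_size=0.2, random_state=0)

        model = make_pipeline(TfidfVectorizer(max_features=20000),
                              LogisticRegression(max_iter=1000))
        model.fit(X_train, y_train)
        print("held-out accuracy:", model.score(X_test, y_test))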